Chaotic Regularization and Heavy-Tailed Limits for Deterministic Gradient Descent
Recent studies have shown that gradient descent (GD) can achieve improved generalization when its dynamics exhibit chaotic behavior. However, to obtain the desired effect, the step-size should be chosen sufficiently large, a task which is problem-dependent and can be difficult in practice. In this study, we incorporate a chaotic component into GD in a controlled manner and introduce \emph{multiscale perturbed GD} (MPGD), a novel optimization framework in which the GD recursion is augmented with chaotic perturbations that evolve via an independent dynamical system. We analyze MPGD from three different angles: (i) Building on recent advances in rough paths theory, we show that, under appropriate assumptions, as the step-size decreases, the MPGD recursion converges weakly to a stochastic differential equation (SDE) driven by a heavy-tailed L\'{e}vy-stable process. Empirical results are provided to demonstrate the advantages of MPGD.
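The MPGD recursion described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's construction: the quadratic objective, the logistic map as the independent chaotic driver, and the perturbation scale `sigma` are all assumptions made for the example.

```python
# Minimal sketch of a GD recursion augmented with a chaotic perturbation
# that evolves via an independent dynamical system (the MPGD idea).
# Objective, chaotic map, and scaling are illustrative assumptions.

def grad_f(x):
    return 2.0 * x  # gradient of the toy objective f(x) = x^2

def chaotic_map(z):
    return 4.0 * z * (1.0 - z)  # logistic map at r = 4 (chaotic regime on (0, 1))

def mpgd(x0, z0, eta, sigma, n_steps):
    x, z = x0, z0
    for _ in range(n_steps):
        z = chaotic_map(z)                           # chaotic state evolves on its own
        x = x - eta * grad_f(x) + sigma * (z - 0.5)  # GD step + centred chaotic perturbation
    return x

x_final = mpgd(x0=1.0, z0=0.3, eta=0.1, sigma=0.01, n_steps=500)
```

Because the perturbation is generated by a separate deterministic map, its strength `sigma` can be tuned independently of the step-size, which is the "controlled manner" the abstract refers to.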
Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
This article suggests that deterministic Gradient Descent, which does not use any stochastic gradient approximation, can still exhibit stochastic behaviors. In particular, it shows that if the objective function exhibits multiscale behavior, then in a large-learning-rate regime that resolves only the macroscopic but not the microscopic details of the objective, the deterministic GD dynamics can become chaotic and converge not to a local minimizer but to a statistical distribution. In this sense, deterministic GD resembles stochastic GD even though no stochasticity is injected. A sufficient condition is also established for approximating this long-time statistical limit by a rescaled Gibbs distribution, which, for example, allows escapes from local minima to be quantified. Both theoretical and numerical demonstrations are provided; the theoretical part relies on the construction of a stochastic map that uses bounded noise (as opposed to Gaussian noise).
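The mechanism in this abstract can be observed numerically with a toy multiscale objective. The specific function f(x) = x²/2 + ε·cos(x/ε) and the step-size below are illustrative assumptions, not the paper's examples: with a learning rate large relative to the microscale ε, the GD iterates do not settle at a point but keep moving inside a bounded region around the macroscopic minimum.

```python
import numpy as np

EPS = 0.01  # scale of the microscopic oscillation (illustrative choice)

def grad_f(x):
    # f(x) = x**2 / 2 + EPS * cos(x / EPS): macroscopic bowl + microscopic wiggles
    return x - np.sin(x / EPS)

def run_gd(x0, eta, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] - eta * grad_f(xs[-1]))
    return np.array(xs)

# Learning rate eta = 0.1 >> EPS: the step resolves the bowl but not the
# wiggles, so the update map x -> 0.9*x + 0.1*sin(x/EPS) has derivative of
# magnitude up to ~10.9 and is strongly expanding almost everywhere; the
# iterates typically wander instead of converging to one local minimizer.
traj = run_gd(x0=1.0, eta=0.1, n_steps=2000)
```

A histogram of the trajectory's tail would approximate the statistical limit the paper studies; with a small learning rate instead, the same recursion would simply converge to one of the many microscopic local minimizers.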
Review for NeurIPS paper: Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
Additional Feedback: [After rebuttal] I appreciate the additional explanations in the rebuttal. I think the example (in a more complete version) will go a long way toward improving the paper, but as presented, not enough detail is given for a proper evaluation, so I look forward to reading a revised version of this work. Note that my tautology comment is not saying that the proof is trivial, but that the way it is written masks the potential insights the proof may give; in particular, there should be a result showing that such a limit as in Cons 1 exists under some general conditions characterising the data and the model architecture. I believe the example provided in the rebuttal may be useful for formalising this. On first reading, these conditions appear not well-motivated.
Review for NeurIPS paper: Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
Before the author response, all the reviewers seemed to agree that the results were quite interesting (and I agree), but had concerns about the connection to ML. The author response included examples which mostly addressed this concern, so two reviewers recommended acceptance, while another (reviewer 1) recommended rejection, though borderline. However, I feel the remaining concerns of reviewer 1 are rather minor.
Noise-induced degeneration in online learning
Sato, Yuzuru, Tsutsui, Daiji, Fujiwara, Akio
Gradient descent is the simplest optimisation algorithm, represented by gradient dynamics in a potential. When the input data are finite, gradient descent dynamics fluctuate due to finite-size effects, and the algorithm is called stochastic gradient descent. In this paper, we study the stability of stochastic gradient descent dynamics from the viewpoint of dynamical systems theory. Learning is characterised as nonautonomous dynamics driven by uncertain external input, and as multiscale dynamics consisting of slow memory dynamics and fast system dynamics. When the uncertain input sequences are modelled by stochastic processes, the dynamics of learning is described by a random dynamical system. In contrast to traditional Fokker-Planck approaches [5, 15], the random dynamical system approach enables the study not only of stationary distributions and global statistics, but also of the pathwise structure of stochastic dynamics. Based on nonautonomous and random dynamical system theory, it is possible to analyse stability and bifurcation in machine learning.
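The random-dynamical-system view sketched above can be made concrete in one dimension. In this hedged example (the per-sample quadratic losses, step-size, and uniform sampling are assumptions for illustration), each SGD step applies one map drawn at random from a finite family, and pathwise stability is probed with a finite-time Lyapunov exponent:

```python
import numpy as np

# SGD on per-sample losses f_i(x) = a_i * (x - b_i)**2 / 2.
# Each step applies the random affine map x -> x - eta * a_i * (x - b_i),
# so the learning process is a random dynamical system; the sign of the
# Lyapunov exponent characterises pathwise contraction or expansion.
# All constants here are illustrative assumptions.

rng = np.random.default_rng(0)
a = np.array([0.5, 1.0, 2.0])   # per-sample curvatures
b = np.array([-1.0, 0.0, 1.0])  # per-sample minimisers
eta = 0.4                        # step-size

def sgd_step(x, i):
    return x - eta * a[i] * (x - b[i])

x = 2.0
lyap_sum = 0.0
n = 5000
for _ in range(n):
    i = rng.integers(len(a))
    # derivative of the chosen affine map is (1 - eta * a_i)
    lyap_sum += np.log(abs(1.0 - eta * a[i]))
    x = sgd_step(x, i)

lyapunov = lyap_sum / n  # negative => pathwise contraction (stable learning)
```

Unlike a Fokker-Planck analysis, which would only describe the stationary distribution of `x`, the Lyapunov exponent is a pathwise quantity: increasing `eta` until some `|1 - eta * a_i| > 1` can push it toward zero or above, which is the kind of noise-induced stability change the random-dynamical-system approach is designed to detect.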